The development of a combustion temperature standard for the calibration of optical diagnostic techniques
This thesis describes the development and evaluation of a high-temperature combustion
standard. This comprises a McKenna burner premixed flame, together with a full
assessment of its temperature, stability and reproducibility. I have evaluated three
techniques for high-accuracy flame thermometry: Modulated Emission in Gases
(MEG), Rayleigh scattering thermometry and photo-acoustic thermometry.
MEG: Analysis shows that MEG is not usable in this application because the sharp
spectral features of the absorption coefficient of gases are represented within MEG
theory as an average absorption coefficient over the optical detection bandwidth. A
secondary difficulty arises from the lack of high-power lasers operating at wavelengths
that coincide with molecular absorption lines in the hot gas.
Rayleigh Scattering: Applying corrections for the temperature-dependence of the
scattering cross-section, it has been possible to determine the temperature of the
combustion standard with an uncertainty of approximately 1%. The temperature
dependence of the scattering cross-section arises from changes in the mean molecular
polarisability and anisotropy and can amount to 2% between flame and room
temperatures. Using a pulsed Nd:YAG laser operating at 532 nm and high-linearity
silicon detectors, the Rayleigh scattering experimental system has been optimised.
Temperatures measured over a three-month interval are shown to be reproducible to
better than 0.4%, demonstrating the suitability of the McKenna burner as a combustion
standard.
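The iterative correction implied by the temperature-dependent cross-section can be sketched as follows. This is a minimal illustration, not the thesis's calibration: the linear cross-section model, the reference temperature and the signal ratios are invented placeholders for the real polarisability and anisotropy data.

```python
# Hedged sketch: Rayleigh thermometry via the signal ratio to a reference
# condition, assuming ideal-gas density at constant pressure and a simple
# linear model for the ~2% temperature dependence of the effective
# scattering cross-section. All numbers are illustrative.

def sigma_ratio(T, T_ref=295.0, slope=2e-2 / 1700.0):
    """Illustrative cross-section ratio sigma(T)/sigma(T_ref).

    A linear model spanning ~2% between room and flame temperature, as
    the abstract indicates; the real correction comes from tabulated
    polarisability and anisotropy data.
    """
    return 1.0 + slope * (T - T_ref)

def rayleigh_temperature(S_flame, S_ref, T_ref=295.0, n_iter=20):
    """Infer flame temperature from the Rayleigh signal ratio.

    At constant pressure, number density n ~ 1/T, and the detected
    signal S ~ n * sigma(T), so
        T = T_ref * (S_ref / S_flame) * sigma(T) / sigma(T_ref),
    solved by fixed-point iteration because sigma depends on T.
    """
    T = T_ref * S_ref / S_flame  # first guess: ignore sigma(T)
    for _ in range(n_iter):
        T = T_ref * (S_ref / S_flame) * sigma_ratio(T, T_ref)
    return T
```

The iteration converges quickly because the cross-section correction is small (a few percent) compared with the density term.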
Photo-Acoustic: By measuring the transit time of a spark-induced sound wave past two
parallel probe beams, the temperature has been determined with an uncertainty of
approximately 1%.
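The transit-time principle can be illustrated with the ideal-gas speed-of-sound relation, c = sqrt(gamma * R * T / M), inverted for temperature. The values of gamma and the molar mass for the burnt-gas mixture below are illustrative assumptions, not values from the thesis.

```python
# Hedged sketch of the photo-acoustic method: the spark-induced sound
# wave crosses two parallel probe beams a known distance apart; the
# speed of sound in an ideal gas then gives the temperature via
#     c = sqrt(gamma * R * T / M)  =>  T = c**2 * M / (gamma * R).

R = 8.314462618  # molar gas constant, J mol^-1 K^-1

def sound_speed(beam_separation_m, transit_time_s):
    """Speed of sound from the measured transit time between the beams."""
    return beam_separation_m / transit_time_s

def acoustic_temperature(c, gamma=1.30, molar_mass=0.0285):
    """Temperature from sound speed for an ideal gas.

    gamma and molar_mass (kg/mol) are illustrative stand-ins for the
    burnt-gas mixture; real values depend on the flame stoichiometry.
    """
    return c**2 * molar_mass / (gamma * R)
```

Because T scales as c squared, a 0.5% uncertainty in the transit time or beam separation maps to roughly a 1% uncertainty in temperature, consistent with the quoted figure.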
Flame temperatures measured by the photo-acoustic and Rayleigh scattering
thermometry system show good agreement. For high airflow rates the agreement is
better than 1% of temperature, but for low airflow rates, photo-acoustic temperatures are
approximately 3.6% higher than the Rayleigh temperatures. Further work is needed to
understand this discrepancy.
On the importance of cellular composition in human brain transcriptomics
The human brain consists of billions of cells, classifiable into hundreds of distinct cell-types and -subtypes. However, as studying cells or cell-types in isolation has proven challenging, most functional genomic assays are performed at the bulk level, i.e., they pool signal across a heterogeneous mass of cells. Such bulk assays provide an aggregated measure: that of the signal within the bulk’s constituent cell-types, weighted by their relative abundances. In this thesis, I explore the role cellular composition plays in brain transcriptome studies, and argue that its quantification and control are critical for correctly interpreting results.
I begin by evaluating in silico methods for estimating cellular composition from bulk RNA-seq output. Using a diverse range of samples with known composition, I show that accurate estimation is achieved by combining partial deconvolution algorithms with biologically-relevant signatures, and confirm these findings in real transcriptome data using the goodness-of-fit metric.
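The core of partial deconvolution is to model a bulk profile as a weighted sum of cell-type signature profiles and recover the weights. The sketch below is a deliberately minimal illustration of that idea with an invented two-cell-type signature; real analyses use biologically relevant signatures, non-negativity constraints (e.g. NNLS) and a goodness-of-fit check, as the thesis discusses.

```python
import numpy as np

# Hedged sketch of partial deconvolution: bulk = signature @ proportions,
# solved by least squares, with clipping and renormalisation standing in
# for a proper non-negativity-constrained solver.

def estimate_composition(signature, bulk):
    """Least-squares proportions, clipped to >= 0 and renormalised to 1."""
    p, *_ = np.linalg.lstsq(signature, bulk, rcond=None)
    p = np.clip(p, 0.0, None)
    return p / p.sum()

def goodness_of_fit(signature, bulk, p):
    """R^2 of the reconstruction - a simple proxy for a fit metric."""
    resid = bulk - signature @ p
    ss_res = float(resid @ resid)
    ss_tot = float(((bulk - bulk.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot

# genes x cell-types signature (e.g. neurons, astrocytes) - invented numbers
S = np.array([[10.0, 1.0],
              [2.0, 8.0],
              [5.0, 5.0]])
true_p = np.array([0.7, 0.3])
bulk = S @ true_p            # noiseless synthetic bulk sample
p_hat = estimate_composition(S, bulk)
```

In this noiseless toy case the true proportions are recovered exactly; with real data, the goodness-of-fit value flags samples where the chosen signature explains the bulk poorly.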
Having established that composition can be estimated in brain transcriptomes, I next demonstrate the importance of doing so. Through simulation, I show that small composition differences across samples (~5%) can lead to hundreds of false positives in differential expression, but modelling composition as a covariate is sufficient to control it. I apply these findings to a recent bulk brain resource of Autism vs. Control RNA-seq, and propose that the majority of reported differentially-expressed genes are driven by composition rather than dysregulation.
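The confounding mechanism can be demonstrated with a toy simulation: a gene whose expression tracks only neuronal proportion, plus a ~5% composition shift between groups, produces a spurious group effect that disappears once composition enters the model as a covariate. This is purely illustrative and is not the thesis's simulation framework.

```python
import numpy as np

# Hedged sketch of composition confounding in differential expression.
# "case"/"control" labels, effect sizes and noise levels are invented.

rng = np.random.default_rng(0)
n = 200
group = np.repeat([0.0, 1.0], n // 2)                  # control / case
prop = 0.40 + 0.05 * group + rng.normal(0, 0.01, n)    # ~5% composition shift
expr = 10.0 * prop + rng.normal(0, 0.05, n)            # no true group effect

def ols_coef(X, y):
    """OLS coefficients, intercept column prepended and then dropped."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

naive_effect = ols_coef(group[:, None], expr)[0]                   # confounded
adjusted_effect = ols_coef(np.column_stack([group, prop]), expr)[0]  # corrected
```

The naive model attributes the composition-driven expression difference to the group label; the covariate-adjusted model estimates a group effect near zero, mirroring the argument that modelling composition controls the false positives.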
To extend these findings, I use data from recent experimental methods to explore brain cell-type-specific gene expression. I characterise 9 adult human brain samples at the single-nucleus level, exploring the diversity in cell-types and their perturbation in autism. Rich time-course data spanning the prenatal period to adulthood are also evaluated to explore how dynamic, cell-type-specific regulation across development associates with autism and other brain traits.
The work in this thesis thus represents a critical re-evaluation of past brain transcriptome data, whilst also looking forward towards new analytical approaches and experimental methods.
Re-estimation of argon isotope ratios leading to a revised estimate of the Boltzmann constant
In 2013, NPL, SUERC and Cranfield University published an estimate of the Boltzmann constant [1] based on a measurement of the limiting low-pressure speed of sound in argon gas. Subsequently, an extensive investigation by Yang et al. [2] revealed that there was likely to have been an error in the estimate of the molar mass of the argon used in the experiment. Responding to [2], de Podesta et al. revised their estimate of the molar mass [3]. The shift in the estimated molar mass, and hence in the estimate of kB, was large: -2.7 parts in 10^6, nearly four times the original uncertainty estimate. The work described here was undertaken to understand the cause of this shift, and our conclusion is that the original samples were probably contaminated with argon from atmospheric air. In this work we have repeated the measurement reported in [1] on the same gas sample that was examined in [2, 3]. However, we have used a different technique for sampling the gas that has allowed us to eliminate the possibility of contamination of the argon samples. We have repeated the sampling procedure three times, and examined samples on two mass spectrometers. This procedure confirms the isotopic ratio estimates of Yang et al. [2] but with lower uncertainty, particularly in the relative abundance ratio R38:36. Our new estimate of the molar mass of the argon used in Isotherm 5 in [1] is 39.947 727(15) g mol^-1, which differs by +0.50 parts in 10^6 from the estimate 39.947 707(28) g mol^-1 made in [3]. This new estimate of the molar mass leads to a revised estimate of the Boltzmann constant of kB = 1.380 648 60(97) × 10^-23 J K^-1, which differs from the 2014 CODATA value by +0.05 parts in 10^6.
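The dependence of molar mass on the measured abundance ratios can be made concrete with a short calculation. The ratios used below are approximate atmospheric values chosen for illustration, not the paper's measured sample ratios.

```python
# Hedged sketch: molar mass of argon from isotope abundance ratios
# expressed relative to 36Ar (R38:36 and R40:36, as in the abstract).

# Approximate isotope atomic masses in g/mol
M36, M38, M40 = 35.96754511, 37.96273211, 39.96238312

def argon_molar_mass(r38_36, r40_36):
    """Molar mass from abundance ratios relative to 36Ar.

    Fractional abundances follow from normalising (1, R38:36, R40:36).
    """
    total = 1.0 + r38_36 + r40_36
    f36, f38, f40 = 1.0 / total, r38_36 / total, r40_36 / total
    return f36 * M36 + f38 * M38 + f40 * M40

# Approximate atmospheric ratios (illustrative only)
M_atm = argon_molar_mass(0.1885, 298.56)
```

Because 40Ar dominates, small biases in R40:36 (for instance from atmospheric contamination of a sample with a different isotopic signature) shift the molar mass, and hence kB, at the parts-in-10^6 level that matters here.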
Constraints on quark energy loss from Drell-Yan data
A leading-order analysis of E866/NuSea and NA3 Drell-Yan data in nuclei is
carried out. At Fermilab energy, the large uncertainties in the amount of sea
quark shadowing prohibit clarifying the origin of the nuclear dependence
observed experimentally. On the other hand, the small shadowing contribution to
the Drell-Yan process in pi- A collisions at SPS allows one to set tight
constraints on the energy loss of fast quarks in nuclear matter. We find the
transport coefficient to be q = 0.24 +/- 0.18 GeV/fm^2, which corresponds to a mean
energy loss per unit length -dE/dz = 0.20 +/- 0.15 GeV/fm for E > 50 GeV quarks
in a large (A = 200) nucleus.
Comment: 14 pages, 2 figures. New modelling of the quenching, conclusions unchanged. Accepted for publication in Phys. Lett.
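The scale of the quoted numbers can be checked with a back-of-envelope estimate: a quark produced at a random point inside a uniform sphere traverses on average about 3R/4 of nuclear matter, with R = r0 * A^(1/3). This is a rough illustration only, not the paper's modelling of the quenching.

```python
# Hedged, back-of-envelope illustration: mean in-medium path length and
# the resulting total energy loss for the quoted dE/dz. Not the paper's
# treatment; the 3R/4 mean path is the standard uniform-sphere result.

R0_FM = 1.2  # nuclear radius parameter, fm

def nuclear_radius_fm(A):
    """Hard-sphere nuclear radius R = r0 * A^(1/3), in fm."""
    return R0_FM * A ** (1.0 / 3.0)

def mean_energy_loss_gev(A, dEdz_gev_per_fm=0.20):
    """<Delta E> = dE/dz * <L>, with <L> ~ 3R/4 for uniform production."""
    return dEdz_gev_per_fm * 0.75 * nuclear_radius_fm(A)
```

For A = 200 this gives an energy loss of order 1 GeV, which conveys why the effect is visible in Drell-Yan yields yet demands tight experimental constraints.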
Effect of mattress deflection on CPR quality assessment for older children and adolescents
Appropriate chest compression (CC) depth is associated with improved CPR outcome. CCs provided in hospital are often conducted on a compliant mattress. The objective was to quantify the effect of mattress compression on the assessment of CPR quality in children.
Methods: A force and deflection sensor (FDS) was used during CPR in the Pediatric Intensive Care Unit and Emergency Department of a children's hospital. The sensor was interposed between the chest of the patient and the hands of the rescuer and measured CC depth. Following each CPR event, the event was reconstructed with a manikin and an identical mattress/backboard/patient configuration. CCs were performed using the FDS on the sternum and a reference accelerometer attached to the spine of the manikin, providing a means to calculate the mattress deflection.
Results: Twelve CPR events with 14,487 CC (11 patients, median age 14.9 years) were recorded and reconstructed: 9 on ICU beds (9296 CC), 3 on stretchers (5191 CC). Measured mean CC depth during CPR was 47 +/- 8 mm on ICU beds and 45 +/- 7 mm on stretcher beds, with overestimation of 13 +/- 4 mm and 4 +/- 1 mm, respectively, due to mattress compression. After adjusting for this, the proportion of CC that met the CPR guidelines decreased from 88.4 to 31.8% on ICU beds (p < 0.001), and from 86.3 to 64.7% on stretchers (p < 0.001). The proportion of appropriate-depth CC was significantly smaller on ICU beds (p < 0.001).
Conclusion: CC conducted on a non-rigid surface may not be deep enough. The FDS may overestimate CC depth by 28% on ICU beds and by 10% on stretcher beds.
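The arithmetic behind the correction is simple: the sternum-mounted FDS measures sternal displacement plus mattress deflection, while the spine-mounted reference isolates the mattress term, so true compression depth is their difference. The sketch below uses the ICU-bed means from the abstract; the ~50 mm guideline target is an illustrative assumption, not a value from the paper.

```python
# Hedged sketch of the mattress-deflection adjustment described above.

def adjusted_depth_mm(measured_mm, mattress_deflection_mm):
    """Sternum-to-spine compression after removing mattress deflection."""
    return measured_mm - mattress_deflection_mm

def overestimation_pct(measured_mm, mattress_deflection_mm):
    """Percentage of the measured depth attributable to the mattress."""
    return 100.0 * mattress_deflection_mm / measured_mm

def meets_guideline(depth_mm, target_mm=50.0):
    """Depth check against an illustrative ~50 mm target (assumption)."""
    return depth_mm >= target_mm
```

With the ICU-bed means (47 mm measured, 13 mm deflection), the adjusted depth is 34 mm and the overestimation is about 28%, matching the conclusion's figure.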
A revised edition of the readiness to change questionnaire (treatment version)
The UK Alcohol Treatment Trial provided an opportunity to examine the factor structure of the Readiness to Change Questionnaire-Treatment Version (RCQ[TV]) in a large sample (N = 742) of individuals in treatment for alcohol problems who were given the RCQ[TV] at baseline, 3-month and 12-month follow-up. Confirmatory factor analysis of the previously reported factor structure (5 items for each of Precontemplation, Contemplation and Action scales) resulted in a relatively poor fit to the data. Removal of one item from each of the scales resulted in a 12-item instrument for which goodness-of-fit indices were improved, without loss of internal consistency of the three scales, on all three measurement occasions. Inspection of relationships between stage allocation by the new instrument and negative alcohol outcome expectancies provided evidence of improved construct validity for the revised edition of the RCQ[TV]. There was also a strong relationship between stage allocation at 3-month follow-up and outcome of treatment at 12 months. The revised edition of the RCQ[TV] offers researchers and clinicians a shorter and improved measurement of stage of change in the alcohol treatment population.
A posteriori inclusion of parton density functions in NLO QCD final-state calculations at hadron colliders: The APPLGRID Project
A method to facilitate the consistent inclusion of cross-section measurements
based on complex final-states from HERA, TEVATRON and the LHC in proton parton
density function (PDF) fits has been developed. This can be used to increase
the sensitivity of LHC data to deviations from Standard Model predictions. The
method stores perturbative coefficients of NLO QCD calculations of final-state
observables measured in hadron colliders in look-up tables. This allows the a
posteriori inclusion of parton density functions (PDFs), and of the strong
coupling, as well as the a posteriori variation of the renormalisation and
factorisation scales in cross-section calculations.
The main novelties in comparison to original work on the subject are the use
of higher-order interpolation, which substantially improves the trade-off
between accuracy and memory use, and a CPU and computer memory optimised way to
construct and store the look-up table using modern software tools.
It is demonstrated that a sufficient accuracy on the cross-section
calculation can be achieved with reasonably small look-up table size by using
the examples of jet production and electro-weak boson (Z, W) production in
proton-proton collisions at a centre-of-mass energy of 14 TeV at the LHC.
The use of this technique in PDF fitting is demonstrated in a PDF-fit to HERA
data and simulated LHC jet cross-sections as well as in a study of the jet
cross-section uncertainties at various centre-of-mass energies.
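The look-up-table idea can be conveyed with a one-dimensional toy: the expensive calculation fills a grid of weights once, after which any PDF can be convolved with the stored weights at negligible cost. The "NLO weights" and "PDFs" below are invented toy functions, and the real APPLGRID grids are multi-dimensional (in x1, x2 and Q^2) with higher-order interpolation.

```python
import numpy as np

# Hedged toy sketch of a posteriori PDF convolution with a stored grid.

x_nodes = np.linspace(0.01, 0.99, 50)  # toy interpolation nodes in x

def fill_grid(weight_fn):
    """One-off 'expensive' step: evaluate perturbative weights on the grid."""
    return weight_fn(x_nodes)

def convolve(grid, pdf):
    """Cheap a posteriori step: sum stored weights against any PDF."""
    return float(np.sum(grid * pdf(x_nodes)))

toy_weights = fill_grid(lambda x: (1 - x) ** 3)    # stand-in for NLO coefficients

def pdf_a(x):
    return x ** -0.5 * (1 - x) ** 3                # toy PDF set A

def pdf_b(x):
    return 1.1 * pdf_a(x)                          # toy PDF set B (rescaled)

sigma_a = convolve(toy_weights, pdf_a)
sigma_b = convolve(toy_weights, pdf_b)
```

Because the cross-section is linear in the PDF, swapping PDF sets, varying the strong coupling, or scanning fit parameters only requires re-doing the cheap sum, which is what makes the method usable inside an iterative PDF fit.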